Conversation
## Walkthrough
The test suites were refactored to remove all conditional skipping and exception handling related to the presence or validity of the `OPENAI_API_KEY`. Tests now run directly without internal skips, asserting only that results are not None or contain expected outputs. GitHub workflows were enhanced with additional environment variable settings, debugging steps, and new test steps for real API calls. New integration and end-to-end test suites for the CrewAI framework were added, including real API call tests guarded by environment variables.
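The env-var guard mentioned here isn't shown in this summary, so the sketch below is illustrative only: a module-level `pytestmark` that skips every test in a real-API suite unless a plausible key is present (the predicate name and key patterns are assumptions, not taken from the repository):

```python
import os
import pytest

def _needs_skip(key):
    """Illustrative predicate: treat an empty or test-prefixed key as invalid."""
    return not key or key.startswith("sk-test-")

# Applied module-wide, so every test in the real-API suite is guarded at once.
pytestmark = pytest.mark.skipif(
    _needs_skip(os.environ.get("OPENAI_API_KEY")),
    reason="Real API tests need a valid OPENAI_API_KEY",
)
```

A module-level guard like this keeps the individual test bodies free of skip logic, which is the shape the refactor described above moves toward.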
## Changes
| File(s) | Change Summary |
|-----------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------|
| src/praisonai/tests/test.py, src/praisonai/tests/test_agents_playbook.py | Removed all try-except blocks and conditional skips based on API key presence or error messages; simplified assertions; print test results. |
| src/praisonai/tests/e2e/autogen/test_autogen_real.py | Removed skip decorator that prevented tests from running without API key, enabling always-on execution. |
| src/praisonai/tests/e2e/crewai/test_crewai_real.py | Added new CrewAI real end-to-end test suite with multiple tests for environment checks, simple and multi-agent setups, and full execution guarded by env var. |
| src/praisonai/tests/integration/crewai/test_crewai_basic.py | Added new CrewAI integration test suite with fixtures mocking completions, tests for import, agent creation, workflow, config validation, and collaboration. |
| src/praisonai/tests/e2e/crewai/__init__.py, src/praisonai/tests/integration/crewai/__init__.py | Added new empty `__init__.py` files with comments for CrewAI test packages. |
| .github/workflows/test-core.yml | Added environment variable settings (LOGLEVEL, PYTHONPATH); added debugging steps for API key presence and validity; enhanced test steps with fallback logic and directory checks; renamed legacy tests step with real API key message. |
| .github/workflows/python-package.yml | Added environment variable setup for OpenAI API key, base, model, and log level; expanded dependency installation with optional extras and additional packages; added debugging step to print API key status and environment variables. |
| .github/workflows/unittest.yml | Added two new test steps running real API tests and E2E real tests with `continue-on-error: true`. |
| .gitignore | Modified to explicitly include CrewAI integration and E2E test directories by negating ignore patterns. |
## Possibly related PRs
- MervinPraison/PraisonAI#507: Previously added conditional skipping of tests based on API authentication errors, related by handling of API key validation in tests.
## Poem
> 🐇 In tests we hopped with keys so shy,
> No skips or catches, just give it a try!
> CrewAI joins the warren, fresh and bright,
> Workflows enhanced with debugging light.
> Now tests run free, no shadows to hide,
> With open keys, we leap with pride! 🌿✨
> The rabbit’s joy in code’s wide stride! ✨
Hello @MervinPraison, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team, gemini-code-assist here with a summary of this pull request. This PR focuses on improving the test suite's robustness when dealing with API keys. It introduces a helper function to check for test or invalid API keys and modifies several API-dependent tests to skip execution if a valid key is not detected or if an API authentication error occurs during the test run. This helps prevent test failures in environments where API keys are not configured or are intentionally set to test values.
Highlights
- **Improved API Test Skipping Logic:** A new helper function `_should_skip_api_test` is added to centralize the logic for determining if API-dependent tests should be skipped based on the `OPENAI_API_KEY` environment variable, checking for empty, test, or known invalid key patterns.
- **Proactive and Reactive Test Skipping:** Multiple tests are updated to proactively skip using the new helper function before execution. Additionally, the exception handling in these tests is modified to reactively skip if specific API authentication errors (`Invalid API Key`, `AuthenticationError`, `401`) are caught during the test run.
- **Enhanced Command Test Handling:** The tests for the command-line interface (`test_praisonai_command`, `test_praisonai_init_command`) are updated to check the command's standard output for API error messages and skip the test if such errors are detected.
Changelog
- **src/praisonai/tests/test.py**
  - Added `_should_skip_api_test` function to check for invalid/test API keys (lines 14-22).
  - Implemented proactive skipping using `_should_skip_api_test` at the start of several test methods (e.g., lines 26-27, 41-42, 56-57, 71-72, 86-87, 114-115, 127-128, 141-142, 165-166, 189-190).
  - Modified exception handling in various tests to skip if specific API authentication errors are encountered (e.g., lines 34-36, 49-51, 64-66, 79-81, 94-96, 158-160, 182-184, 206-208).
  - Updated command tests (`test_praisonai_command`, `test_praisonai_init_command`) to check command output for API errors and skip accordingly (lines 120-122, 133-135).
- Added
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
API key is gone,
Tests skip, no failures seen,
Green checks all around.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩
Code Review
The changes in this pull request refactor the test skipping logic in src/praisonai/tests/test.py. A new helper function _should_skip_api_test is introduced to centralize the check for invalid or test API keys. The exception handling in the tests is also updated to specifically skip only when API key-related errors are encountered, rather than skipping any exception when a test key is present. This is a significant improvement to the test suite's robustness and clarity, ensuring tests fail for genuine code errors even when run with test keys, while still allowing skipping when no valid key is available.
Summary of Findings
- **Hardcoded test key patterns:** The `_should_skip_api_test` function hardcodes several specific string patterns (`sk-test-`, `nokey`, `fallback`, `testing-only`, `not-real`) to identify test or invalid API keys. While functional for the current known patterns, this list might need updates if new test key formats or conventions emerge. This is a low severity issue as it's contained within a test helper function.
- **Hardcoded API error strings:** The exception handling logic checks for specific hardcoded strings (`'Invalid API Key'`, `'AuthenticationError'`, `'401'`) within the exception message to determine if the test should be skipped due to an API key issue. This approach is more specific than the previous one but could be brittle if the exact error messages from the API provider change. This is a low severity issue as it's in a test file and serves a specific skipping purpose.
- **Minor formatting/blank lines:** There are some added blank lines after `self.skipTest` calls (e.g., RIGHT lines 28, 43, etc.). While not a functional issue, PEP 8 generally recommends one blank line between methods/functions and two at the top level. This is a minor stylistic point.
Merge Readiness
The changes improve the test suite's reliability and maintainability by centralizing and refining the API key-based test skipping logic. No critical or high severity issues were found during the review. There are a few low severity findings related to hardcoded strings and minor formatting, but these do not block merging. The pull request appears ready to be merged. Please note that I am unable to approve the pull request; another reviewer will need to do so.
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/praisonai/tests/test.py (1)
34-36: **Consider centralizing the fallback error patterns.**

The fallback error checking logic is duplicated across multiple test methods. Consider extracting this into a helper function or constant to improve maintainability.

Consider adding a helper function like this at the top of the file:

```python
def _is_api_key_error(error_message):
    """Check if an error message indicates an API key issue"""
    api_key_errors = ['Invalid API Key', 'AuthenticationError', '401']
    return any(error in str(error_message) for error in api_key_errors)
```

Then replace the duplicated checks with:

```diff
- if any(error in str(e) for error in ['Invalid API Key', 'AuthenticationError', '401']):
+ if _is_api_key_error(e):
```

Also applies to: 49-51, 64-66, 79-81, 94-96, 158-160, 182-184, 206-208
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `src/praisonai/tests/test.py` (5 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/praisonai/tests/test.py (1)
src/praisonai/praisonai/cli.py (1)
- `PraisonAI` (110-655)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: test-core (3.11)
- GitHub Check: quick-test
- GitHub Check: test-core (3.11)
- GitHub Check: quick-test
🔇 Additional comments (2)
src/praisonai/tests/test.py (2)
14-22: **Good refactoring to centralize API key validation logic.**

The helper function effectively consolidates the logic for detecting invalid/test API keys. The patterns covered (test keys, fallback keys, etc.) are appropriate for avoiding actual API calls during testing.

26-27: **Excellent consistency in applying the skip logic.**

All test methods now consistently use the helper function with the same skip message, which improves maintainability and reduces code duplication.

Also applies to: 41-42, 56-57, 71-72, 86-87, 114-115, 127-128, 141-142, 165-166, 189-190
…cases

- Updated test cases to provide clearer messages for skipping tests due to invalid/test API keys.
- Simplified result validation by asserting non-null results instead of checking for specific output strings.
- Removed redundant exception handling for API errors, focusing on essential functionality.

This change enhances readability and maintainability of the test code.

- Added PYTHONPATH to the GitHub Actions environment for improved module accessibility.
- Updated legacy test command to skip execution if the API key is invalid, providing clearer output.
- Optimised command execution in test cases for better readability and maintainability.

These changes improve the testing process and ensure better handling of API key scenarios.

- Added debugging steps to check API key availability and log environment variables in the GitHub Actions workflow.
- Updated test cases to remove API key validation checks, ensuring all tests run regardless of API key status.
- Renamed test steps for clarity and improved readability.

These changes improve the testing process and provide better insights during execution.
Actionable comments posted: 0
🧹 Nitpick comments (1)
.github/workflows/test-core.yml (1)
43-43: **Ensure PYTHONPATH includes agents directory.**

The new `PYTHONPATH` entry makes imports from `src/praisonai-agents` available to tests, resolving potential import errors. Confirm that the `src/praisonai-agents` directory exists and is needed by your test suite. For clarity and maintainability, consider moving this into a dedicated `env:` block at the job or step level, rather than echoing it in a run script.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `.github/workflows/test-core.yml` (2 hunks)
- `src/praisonai/tests/test.py` (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/praisonai/tests/test.py
🔇 Additional comments (1)
.github/workflows/test-core.yml (1)
65-70: Details

❓ Verification inconclusive

**Run legacy tests with dummy API key and skip logic**

This step overrides `OPENAI_API_KEY` with a dummy value to trigger the new skip logic in `tests/test.py` and continues on error. To make this more declarative, you could use the `env:` field under the step:

```yaml
- name: Run Legacy Tests (skip on invalid API key)
  env:
    OPENAI_API_KEY: sk-test-key-for-github-actions-testing-only-not-real
  run: |
    echo "🧪 Running legacy tests - these should skip if API key is invalid..."
    cd src/praisonai
    python -m pytest tests/test.py -v --tb=short --disable-warnings
  continue-on-error: true
```

Also verify that the dummy key actually causes the intended skips in your legacy tests.
Do you want to run a quick local check? For example:
🏁 Script executed:

```shell
#!/bin/bash
cd src/praisonai
OPENAI_API_KEY="sk-test-key-for-github-actions-testing-only-not-real" pytest tests/test.py -q --disable-warnings | grep -i skipped
```

Length of output: 248

🏁 Script executed:

```shell
#!/bin/bash
# Check for skip logic in tests/test.py
grep -R "skip" -n src/praisonai/tests/test.py || echo "No skip logic found in tests/test.py"
```

Length of output: 1563

🏁 Script executed:

```shell
#!/bin/bash
# Verify that dummy API key triggers skips with verbose pytest
cd src/praisonai
OPENAI_API_KEY="sk-test-key-for-github-actions-testing-only-not-real" python -m pytest tests/test.py -v --disable-warnings | grep -i skipped || echo "No skipped tests detected"
```

Length of output: 222

🏁 Script executed:

```shell
#!/bin/bash
# Display the implementation of _should_skip_api_test in tests/test.py
sed -n '1,80p' src/praisonai/tests/test.py
```

Length of output: 2958
**Use `env:` for clarity and confirm skips**

Rather than embedding the dummy key in the shell, set it declaratively:

```yaml
- name: Run Legacy Tests (skip on invalid API key)
  env:
    OPENAI_API_KEY: sk-test-key-for-github-actions-testing-only-not-real
  run: |
    echo "🧪 Running legacy tests – these should skip if API key is invalid…"
    cd src/praisonai
    python -m pytest tests/test.py -v --tb=short --disable-warnings
  continue-on-error: true
```

- This makes it obvious which environment variables apply to the step.
- Please confirm in your GitHub Actions logs (or by running locally with a real pytest install) that the dummy key indeed causes the tests to be skipped as intended.
- Introduced a new debugging section to display the directory structure and contents relevant to CrewAI tests.
- Enhanced the test execution process by attempting multiple methods to run CrewAI tests, providing clearer feedback on failures.

These changes aim to improve visibility and troubleshooting during the testing process.

…rewAI

- Added entries to the .gitignore file to ensure that integration and end-to-end test directories for CrewAI are not ignored.
- This change enhances the visibility of relevant test files during the development process.

- Added new steps to the GitHub Actions workflow to run real API tests and end-to-end tests with the actual OpenAI API key.
- Removed API key validation checks from test cases, allowing all tests to run regardless of API key status.
- Streamlined test cases for better readability and maintainability.

These changes improve the testing process and ensure comprehensive test coverage.

…ugging

- Added steps to set and debug environment variables related to the OpenAI API in the GitHub Actions workflows.
- Included checks for the availability of the API key and its fallback mechanism, improving visibility during execution.
- Enhanced dependency installation commands for better integration with the existing setup.

These changes aim to streamline the workflow and provide clearer insights into the environment configuration.

… Actions workflow

- Introduced additional checks to verify the presence and validity of the OpenAI API key within the workflow.
- Added a new step to directly test the PraisonAI API key usage, enhancing visibility into the API key configuration and its integration with the PraisonAI model.
- These changes aim to improve debugging capabilities and ensure proper API key handling during the workflow execution.
Actionable comments posted: 7
♻️ Duplicate comments (3)
src/praisonai/tests/test_agents_playbook.py (3)
17-22: **Same concern applies to CrewAI framework test.**

The removal of error handling will cause this test to fail with invalid API keys, as seen in the pipeline failures.
🧰 Tools
🪛 GitHub Actions: Python package
[error] 21-21: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The OpenAI API key is invalid or missing, causing the LLM call to fail.
🪛 GitHub Actions: Quick Validation Tests
[error] 21-21: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The OpenAI API key provided is invalid, causing the LLM call to fail.
24-29: **Same concern applies to internet search tool test.**

The removal of error handling will cause this test to fail with invalid API keys, as seen in the pipeline failures.
🧰 Tools
🪛 GitHub Actions: Python package
[error] 28-28: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The OpenAI API key is invalid or missing, causing the LLM call to fail.
🪛 GitHub Actions: Quick Validation Tests
[error] 28-28: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The OpenAI API key used for authentication is invalid, resulting in failure of the LLM call.
31-36: **Same concern applies to built-in tool test.**

The removal of error handling will cause this test to fail with invalid API keys, as seen in the pipeline failures.
🧰 Tools
🪛 GitHub Actions: Python package
[error] 35-35: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The OpenAI API key is invalid or missing, causing the LLM call to fail.
🪛 GitHub Actions: Quick Validation Tests
[error] 35-35: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The invalid API key caused the LLM call to fail during execution.
🧹 Nitpick comments (8)
.github/workflows/python-package.yml (1)
45-57: **Excellent debugging step for API key validation.**

This debugging step will significantly help diagnose API key issues in CI. The output provides clear visibility into whether secrets are configured and what fallback values are being used.

However, fix the trailing spaces issue:

```diff
       if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
-
+
   - name: Debug API Key Status
     run: |
       echo "🔍 Checking API key availability..."
       if [ -n "${{ secrets.OPENAI_API_KEY }}" ]; then
         echo "✅ GitHub secret OPENAI_API_KEY is available"
         echo "🔑 API key starts with: $(echo "$OPENAI_API_KEY" | cut -c1-7)..."
       else
         echo "⚠️ GitHub secret OPENAI_API_KEY is NOT set - using fallback"
         echo "🔑 Using fallback key: sk-test-key..."
       fi
       echo "🌐 API Base: $OPENAI_API_BASE"
       echo "🤖 Model: $OPENAI_MODEL_NAME"
```

🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 57-57: trailing spaces
(trailing-spaces)
.github/workflows/unittest.yml (1)
475-485: **Good addition of real API test steps with appropriate error handling.**

The new test steps enable comprehensive testing of real API integration while using `continue-on-error: true` to prevent CI failures when API keys are invalid or unavailable. This is a sensible approach given the current pipeline failures.

Consider the following for future improvement:
- Monitor these test results to ensure they provide value
- Consider moving to a separate workflow for real API tests that only runs when valid keys are available
.github/workflows/test-core.yml (1)
46-108: **Excellent comprehensive debugging steps.**

The multiple debugging steps provide thorough visibility into:
- API key availability and configuration
- Environment variable setup
- Python environment validation
- API key validity testing
This will significantly help diagnose CI issues and API key problems.
Fix the trailing spaces issue: the `run: |` line currently ends with trailing whitespace and should be trimmed so the step reads exactly as:

```yaml
- name: Debug Python Environment Variables
  run: |
```

🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 74-74: trailing spaces
(trailing-spaces)
src/praisonai/tests/e2e/crewai/test_crewai_real.py (5)
16-16: **Remove unused import.**

The `pathlib.Path` import is not used anywhere in the code.

```diff
-from pathlib import Path
```

🧰 Tools

🪛 Ruff (0.11.9)

16-16: `pathlib.Path` imported but unused

Remove unused import: `pathlib.Path`

(F401)
44-68: **Consider using `tempfile.TemporaryDirectory` for better cleanup.**

While the current cleanup works, using `tempfile.TemporaryDirectory` as a context manager would ensure cleanup even if exceptions occur before the finally block.

```diff
-    # Create temporary test file
-    with tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) as f:
-        f.write(yaml_content)
-        test_file = f.name
-
-    try:
+    # Create temporary test file
+    with tempfile.TemporaryDirectory() as temp_dir:
+        test_file = os.path.join(temp_dir, "test.yaml")
+        with open(test_file, "w") as f:
+            f.write(yaml_content)
```
112-127: **Apply consistent tempfile usage pattern.**

Same improvement as suggested for the previous test method.

168-204: **Apply consistent tempfile usage pattern.**

Same improvement as suggested for the previous test methods.
🧰 Tools
🪛 Ruff (0.11.9)
199-199: Use `True` instead of `... or True`

Replace with `True`

(SIM222)
199-199: **Simplify redundant assertion.**

The assertion `result is not None or True` will always evaluate to `True` regardless of the result value, making it ineffective.

```diff
- assert result is not None or True  # Allow empty results
+ assert True  # Allow any result including None
```

Or if you want to actually validate the result:

```diff
- assert result is not None or True  # Allow empty results
+ # Allow empty results but log what we got
+ print(f"Result validation: {result is not None}")
```

🧰 Tools

🪛 Ruff (0.11.9)

199-199: Use `True` instead of `... or True`

Replace with `True`

(SIM222)
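The linter's complaint is easy to verify directly: `or True` makes the whole expression truthy no matter what `result` holds.

```python
# `expr or True` evaluates to a truthy value whenever expr is falsy, and to
# expr's (truthy) result otherwise - so the assert below can never fail.
for result in (None, "", 0, [], "real output"):
    assert (result is not None or True)  # passes for every value, even None

# The strict form is the one that actually catches a missing result:
def strict_check(result):
    return result is not None

print(strict_check(None))           # False
print(strict_check("real output"))  # True
```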
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- `.github/workflows/python-package.yml` (1 hunks)
- `.github/workflows/test-core.yml` (2 hunks)
- `.github/workflows/unittest.yml` (1 hunks)
- `.gitignore` (1 hunks)
- `src/praisonai/tests/e2e/autogen/test_autogen_real.py` (0 hunks)
- `src/praisonai/tests/e2e/crewai/__init__.py` (1 hunks)
- `src/praisonai/tests/e2e/crewai/test_crewai_real.py` (1 hunks)
- `src/praisonai/tests/integration/crewai/__init__.py` (1 hunks)
- `src/praisonai/tests/integration/crewai/test_crewai_basic.py` (1 hunks)
- `src/praisonai/tests/test.py` (1 hunks)
- `src/praisonai/tests/test_agents_playbook.py` (1 hunks)
💤 Files with no reviewable changes (1)
- src/praisonai/tests/e2e/autogen/test_autogen_real.py
✅ Files skipped from review due to trivial changes (3)
- `src/praisonai/tests/e2e/crewai/__init__.py`
- `.gitignore`
- `src/praisonai/tests/integration/crewai/__init__.py`
🚧 Files skipped from review as they are similar to previous changes (1)
- src/praisonai/tests/test.py
🧰 Additional context used
🧬 Code Graph Analysis (2)
src/praisonai/tests/test_agents_playbook.py (1)
src/praisonai/praisonai/cli.py (1)
- `PraisonAI` (110-655)
src/praisonai/tests/integration/crewai/test_crewai_basic.py (2)
src/praisonai/tests/unit/agent/test_mini_agents_sequential.py (1)
- `mock_completion` (15-87)

src/praisonai/praisonai/cli.py (1)

- `PraisonAI` (110-655)
🪛 YAMLlint (1.37.1)
.github/workflows/python-package.yml
[error] 44-44: trailing spaces
(trailing-spaces)
[error] 57-57: trailing spaces
(trailing-spaces)
.github/workflows/test-core.yml
[error] 74-74: trailing spaces
(trailing-spaces)
🪛 GitHub Actions: Python package
src/praisonai/tests/test_agents_playbook.py
[error] 14-14: Test failed due to openai.AuthenticationError: Invalid API Key (Error code: 401). The API key used for OpenAI authentication is invalid or missing.
[error] 21-21: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The OpenAI API key is invalid or missing, causing the LLM call to fail.
[error] 28-28: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The OpenAI API key is invalid or missing, causing the LLM call to fail.
[error] 35-35: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The OpenAI API key is invalid or missing, causing the LLM call to fail.
🪛 GitHub Actions: Quick Validation Tests
src/praisonai/tests/test_agents_playbook.py
[error] 14-14: Test failed due to openai.AuthenticationError: Invalid API Key (Error code: 401). The API key used for OpenAI authentication is invalid.
[error] 21-21: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The OpenAI API key provided is invalid, causing the LLM call to fail.
[error] 28-28: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The OpenAI API key used for authentication is invalid, resulting in failure of the LLM call.
[error] 35-35: Test failed due to litellm.BadRequestError: OpenAIException - Invalid API Key. The invalid API key caused the LLM call to fail during execution.
🪛 Ruff (0.11.9)
src/praisonai/tests/e2e/crewai/test_crewai_real.py
16-16: pathlib.Path imported but unused
Remove unused import: pathlib.Path
(F401)
199-199: Use `True` instead of `... or True`

Replace with `True`
(SIM222)
🔇 Additional comments (8)
.github/workflows/python-package.yml (2)
28-34: **LGTM! Environment variable setup with fallback values.**

The environment variable configuration provides sensible fallbacks for testing, ensuring tests can run even without real API keys configured.

40-42: **Good enhancement to dependency installation.**

Adding comprehensive extras and test dependencies ensures all framework integrations and testing tools are available during CI runs.
.github/workflows/test-core.yml (3)
43-44: **Good additions to environment configuration.**

Adding `LOGLEVEL=DEBUG` and extending `PYTHONPATH` will help with debugging and ensure proper module resolution.
118-137: **Helpful directory structure debugging.**

This debugging step will help identify path-related issues when running CrewAI tests, which is valuable for troubleshooting test execution problems.

163-166: **Good enhancement to legacy test description.**

The updated step name and description clearly indicate that real API keys are being used, which helps with understanding test execution context.
src/praisonai/tests/integration/crewai/test_crewai_basic.py (3)
16-23: **Well-structured mock fixture.**

Good use of MagicMock to create a realistic completion response structure that matches what CrewAI would expect.

25-56: **Comprehensive configuration fixture.**

Excellent fixture providing a realistic multi-agent configuration that covers all required fields for CrewAI agents and tasks.

169-194: **Excellent configuration validation logic.**

This test method provides thorough validation of the CrewAI configuration structure, ensuring all required fields are present and properly structured.
```diff
 def test_main_with_autogen_framework(self):
     """Test AutoGen framework integration with real API calls"""
-    # Check if we have a real API key
-    api_key = os.environ.get('OPENAI_API_KEY')
-    if not api_key or api_key == 'test-key' or api_key.startswith('sk-test-'):
-        self.skipTest("Skipping real test - no valid OPENAI_API_KEY provided")
-
     praisonai = PraisonAI(agent_file='tests/autogen-agents.yaml')
-    try:
-        result = praisonai.run()
-        self.assertIn('### Output ###', result)
-    except Exception as e:
-        if ('Invalid API Key' in str(e) or 'AuthenticationError' in str(e) or
-            'InstructorRetryException' in str(e) or '401' in str(e)):
-            self.skipTest(f"Skipping due to API authentication: {e}")
-        else:
-            raise
+    result = praisonai.run()
+    self.assertIn('### Output ###', result)
```
💡 Verification agent
🧩 Analysis chain
Verify the impact of removing API key validation.
The removal of conditional skipping and exception handling simplifies the test logic but causes the current pipeline failures. While the direct approach is cleaner, it assumes valid API keys are always available.
Consider whether this approach aligns with the testing strategy:
- These tests will fail in environments without valid API keys
- The `@pytest.mark.real` marker suggests these are intended as optional real API tests
- The current CI configuration shows fallback test keys are being used
🏁 Script executed:

```shell
#!/bin/bash
# Description: Check if there are other test files with similar patterns that might need updates
echo "Searching for other test files with API key validation patterns..."
# Look for tests that check OPENAI_API_KEY
rg -A 5 -B 5 "OPENAI_API_KEY" src/praisonai/tests/
# Look for tests with exception handling for API errors
rg -A 10 "AuthenticationError|Invalid API Key" src/praisonai/tests/
```

Length of output: 9884
🏁 Script executed:

```shell
#!/bin/bash
# Check for pytest markers in test_agents_playbook.py
rg -n "@pytest.mark" src/praisonai/tests/test_agents_playbook.py || echo "No pytest markers found"
```

Length of output: 164
Ensure real API tests are skipped when no API key

The four methods in `test_agents_playbook.py` are marked `@pytest.mark.real` but no longer include a `skipif` guard, so they'll always run (and fail) if `OPENAI_API_KEY` isn't set. You should restore API-key validation in one of two ways:

- Add a `skipif` decorator to each test method, e.g.:

  ```python
  @pytest.mark.real
  @pytest.mark.skipif(not os.getenv("OPENAI_API_KEY"), reason="No API key")
  def test_main_with_autogen_framework(self):
      ...
  ```

  Repeat for the other three `@pytest.mark.real` tests (lines 10, 17, 24, 31).

- Or update `test_runner.py` so that, when `OPENAI_API_KEY` is missing, it runs `pytest -m "not real"` instead of always using `-m real`.

Either approach will prevent CI failures in environments without valid API credentials.
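As a concrete sketch of the first option, the guard condition can live in a small helper shared by all four tests. The helper name and the placeholder-key checks below are illustrative, mirroring the validation this PR removed, not existing project code:

```python
import os

def has_real_api_key() -> bool:
    """Return True only when OPENAI_API_KEY looks like a usable key.

    The placeholder-key checks mirror the validation this PR removed;
    the helper name itself is an assumption, not existing project code.
    """
    key = os.environ.get("OPENAI_API_KEY", "")
    return bool(key) and key != "test-key" and not key.startswith("sk-test-")

# Applied in test_agents_playbook.py this would read:
#   @pytest.mark.real
#   @pytest.mark.skipif(not has_real_api_key(), reason="No valid OPENAI_API_KEY")
#   def test_main_with_autogen_framework(self): ...
```

Centralizing the predicate keeps the four decorators identical and makes the skip reason easy to change in one place.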
> Committable suggestion skipped: line range outside the PR's diff.
<details>
<summary>🧰 Tools</summary>
<details>
<summary>🪛 GitHub Actions: Python package</summary>
[error] 14-14: Test failed due to openai.AuthenticationError: Invalid API Key (Error code: 401). The API key used for OpenAI authentication is invalid or missing.
</details>
<details>
<summary>🪛 GitHub Actions: Quick Validation Tests</summary>
[error] 14-14: Test failed due to openai.AuthenticationError: Invalid API Key (Error code: 401). The API key used for OpenAI authentication is invalid.
</details>
</details>
<details>
<summary>🤖 Prompt for AI Agents</summary>
In src/praisonai/tests/test_agents_playbook.py around lines 11 to 15, the test
methods marked with @pytest.mark.real lack a skip condition for missing
OPENAI_API_KEY, causing failures in environments without valid API keys. To fix
this, add a @pytest.mark.skipif decorator to each @pytest.mark.real test method
that checks if OPENAI_API_KEY is unset and skips the test if so. This will
prevent these real API tests from running and failing when no API key is
available.
</details>
```diff
       - name: Test AutoGen Framework
         run: |
           echo "🤖 Testing AutoGen Framework Integration..."
-          cd src/praisonai && python tests/test_runner.py --pattern autogen --verbose || echo "⚠️ AutoGen tests completed with issues"
-        continue-on-error: true
+          cd src/praisonai && python tests/test_runner.py --pattern autogen --verbose

-      - name: Run CrewAI Framework Tests
+      - name: Test CrewAI Framework
         run: |
           echo "⛵ Testing CrewAI Framework Integration..."
-          cd src/praisonai && python tests/test_runner.py --pattern crewai --verbose || echo "⚠️ CrewAI tests completed with issues"
-        continue-on-error: true
+          cd src/praisonai
+          echo "🔍 Trying test runner first..."
+          python tests/test_runner.py --pattern crewai --verbose || {
+            echo "❌ Test runner failed, trying direct pytest..."
+            echo "📁 Current directory: $(pwd)"
+            echo "📁 Looking for CrewAI tests..."
+            find . -name "*crewai*" -type f 2>/dev/null
+            echo "🧪 Trying direct pytest on integration/crewai..."
+            python -m pytest tests/integration/crewai/ -v --tb=short --disable-warnings || {
+              echo "❌ Direct path failed, trying relative path..."
+              python -m pytest integration/crewai/ -v --tb=short --disable-warnings || {
+                echo "❌ All CrewAI test attempts failed"
+                exit 1
+              }
+            }
+          }
```
🛠️ Refactor suggestion
Improved test execution with intelligent fallback logic.
The enhanced test execution steps provide better error handling with fallback attempts on different paths. This approach is more robust than simply using `continue-on-error: true`.
However, consider adding a final status check to ensure at least one test path succeeded:

```diff
           python -m pytest integration/crewai/ -v --tb=short --disable-warnings || {
             echo "❌ All CrewAI test attempts failed"
+            echo "💡 Check if CrewAI tests exist at expected paths"
             exit 1
           }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      - name: Test AutoGen Framework
        run: |
          echo "🤖 Testing AutoGen Framework Integration..."
          cd src/praisonai && python tests/test_runner.py --pattern autogen --verbose

      - name: Test CrewAI Framework
        run: |
          echo "⛵ Testing CrewAI Framework Integration..."
          cd src/praisonai
          echo "🔍 Trying test runner first..."
          python tests/test_runner.py --pattern crewai --verbose || {
            echo "❌ Test runner failed, trying direct pytest..."
            echo "📁 Current directory: $(pwd)"
            echo "📁 Looking for CrewAI tests..."
            find . -name "*crewai*" -type f 2>/dev/null
            echo "🧪 Trying direct pytest on integration/crewai..."
            python -m pytest tests/integration/crewai/ -v --tb=short --disable-warnings || {
              echo "❌ Direct path failed, trying relative path..."
              python -m pytest integration/crewai/ -v --tb=short --disable-warnings || {
                echo "❌ All CrewAI test attempts failed"
                echo "💡 Check if CrewAI tests exist at expected paths"
                exit 1
              }
            }
          }
```
🤖 Prompt for AI Agents
In .github/workflows/test-core.yml around lines 138 to 161, the test steps for
CrewAI framework use fallback logic but lack a final status check to confirm
success. Add a final check after all fallback attempts to verify if any test
command succeeded, and if none did, explicitly fail the job by exiting with a
non-zero status. This ensures the workflow accurately reflects test failures
instead of silently passing.
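The intent of that prompt can be sketched in plain Python: attempt each path in order and fail only when every attempt has failed. The three stand-in functions below are illustrative placeholders for the real workflow commands, not project code:

```python
def run_test_runner() -> bool:
    return False  # stand-in for: python tests/test_runner.py --pattern crewai

def run_pytest_integration() -> bool:
    return False  # stand-in for: python -m pytest tests/integration/crewai/

def run_pytest_relative() -> bool:
    return True   # stand-in for: python -m pytest integration/crewai/

def run_crewai_tests() -> bool:
    """Return True as soon as one fallback path succeeds."""
    for attempt in (run_test_runner, run_pytest_integration, run_pytest_relative):
        if attempt():
            return True
    return False

# The workflow's `exit 1` corresponds to this final check failing.
print("passed" if run_crewai_tests() else "all CrewAI test attempts failed")
```

Because the chain short-circuits on the first success, the job only fails when no path worked, which is exactly the final status check the review asks for.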
```python
from pathlib import Path

# Add the src directory to the path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../../src"))
```
Fix incorrect path calculation for src directory.
The current path calculation goes up 3 levels, but from src/praisonai/tests/e2e/crewai/test_crewai_real.py to reach the src directory, you need to go up 4 levels to the repository root, then down to src.
```diff
-sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../../src"))
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../../../src"))
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/praisonai/tests/e2e/crewai/test_crewai_real.py at line 19, the sys.path
insertion incorrectly goes up 3 directory levels to reach the src directory.
Update the path calculation to go up 4 levels from the current file to the
repository root, then down into the src directory to correctly reference the
source code.
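One way to avoid miscounting `..` segments entirely is to index named parents with `pathlib`. This is a sketch: the path below mirrors the layout discussed in the review, and the parent index would still need checking against the actual repository:

```python
import sys
from pathlib import Path

# Hypothetical location, as discussed in the review above:
#   <repo>/src/praisonai/tests/e2e/crewai/test_crewai_real.py
test_file = Path("src/praisonai/tests/e2e/crewai/test_crewai_real.py")

# parents[0] = crewai, parents[1] = e2e, parents[2] = tests,
# parents[3] = praisonai, parents[4] = src
src_dir = test_file.parents[4]
sys.path.insert(0, str(src_dir))
print(src_dir.name)  # → src
```

Counting named parents in one place is easier to audit than a chain of `../..` segments, and a wrong index fails loudly with `IndexError` instead of silently importing the wrong tree.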
```python
        test_file = "test_crewai_workflow.yaml"
        with open(test_file, "w") as f:
            f.write(yaml_content)

        try:
            praisonai = PraisonAI(
                agent_file=test_file,
                framework="crewai"
            )

            # Test that we can initialize without errors
            assert praisonai.framework == "crewai"
            print("✅ CrewAI workflow test passed")

        finally:
            if os.path.exists(test_file):
                os.remove(test_file)
```
🛠️ Refactor suggestion
Apply consistent tempfile usage pattern.
Same improvement as suggested for the previous test method.
🤖 Prompt for AI Agents
In src/praisonai/tests/integration/crewai/test_crewai_basic.py around lines 146
to 162, the test creates and deletes a temporary file manually. Refactor this to
use Python's tempfile module for creating the temporary file, ensuring automatic
cleanup and safer file handling. Replace the manual open/write/remove pattern
with a tempfile.NamedTemporaryFile context manager or similar, writing
yaml_content to it and passing its name to PraisonAI, removing the explicit file
deletion code.
```python
        test_file = "test_crewai_collaboration.yaml"
        with open(test_file, "w") as f:
            f.write(yaml_content)

        try:
            praisonai = PraisonAI(
                agent_file=test_file,
                framework="crewai"
            )

            # Test that we can create a multi-agent crew
            assert praisonai.framework == "crewai"
            print("✅ CrewAI collaboration test passed")

        finally:
            if os.path.exists(test_file):
                os.remove(test_file)
```
🛠️ Refactor suggestion
Apply consistent tempfile usage pattern.
Same improvement as suggested for the previous test methods.
🤖 Prompt for AI Agents
In src/praisonai/tests/integration/crewai/test_crewai_basic.py around lines 231
to 247, replace the manual creation and deletion of the test YAML file with
Python's tempfile module to create a temporary file. Use
tempfile.NamedTemporaryFile or tempfile.TemporaryDirectory to handle the file
lifecycle automatically, ensuring the file is securely created and cleaned up
without explicit os.remove calls. Update the PraisonAI initialization to use the
temporary file path accordingly.
```python
        test_file = "test_crewai_agents.yaml"
        with open(test_file, "w") as f:
            f.write(yaml_content)

        try:
            # Initialize PraisonAI with CrewAI framework
            praisonai = PraisonAI(
                agent_file=test_file,
                framework="crewai"
            )

            assert praisonai is not None
            assert praisonai.framework == "crewai"
            print("✅ CrewAI PraisonAI instance created successfully")

        finally:
            # Cleanup
            if os.path.exists(test_file):
                os.remove(test_file)
```
🛠️ Refactor suggestion
Use tempfile for safer file handling.
Creating files in the current directory can cause issues in CI environments or when tests run in parallel. Consider using tempfile.
```diff
-        # Create temporary test file
-        test_file = "test_crewai_agents.yaml"
-        with open(test_file, "w") as f:
-            f.write(yaml_content)
-
-        try:
+        # Create temporary test file
+        with tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) as f:
+            f.write(yaml_content)
+        test_file = f.name
+
+        try:
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
        # Create temporary test file
        with tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) as f:
            f.write(yaml_content)
        test_file = f.name

        try:
            # Initialize PraisonAI with CrewAI framework
            praisonai = PraisonAI(
                agent_file=test_file,
                framework="crewai"
            )

            assert praisonai is not None
            assert praisonai.framework == "crewai"
            print("✅ CrewAI PraisonAI instance created successfully")

        finally:
            # Cleanup
            if os.path.exists(test_file):
                os.remove(test_file)
```
🤖 Prompt for AI Agents
In src/praisonai/tests/integration/crewai/test_crewai_basic.py around lines 94
to 112, the test creates a file directly in the current directory which can
cause conflicts in CI or parallel test runs. Replace the manual file creation
with Python's tempfile module to create a temporary file safely. Use
tempfile.NamedTemporaryFile or tempfile.TemporaryDirectory to write
yaml_content, pass the temp file path to PraisonAI, and ensure the file is
cleaned up automatically after the test.
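In isolation, the pattern the bot recommends looks like this. The `yaml_content` value is a stand-in for the PR's actual playbook, and the `PraisonAI` call is elided:

```python
import os
import tempfile

yaml_content = "framework: crewai\n"  # stand-in, not the PR's real playbook

# Write the playbook to the system temp directory rather than the CWD,
# so parallel test runs cannot collide on a shared filename.
with tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) as f:
    f.write(yaml_content)
    test_file = f.name

try:
    # PraisonAI(agent_file=test_file, framework="crewai") would go here.
    with open(test_file) as f:
        round_tripped = f.read()
finally:
    os.remove(test_file)

cleaned_up = not os.path.exists(test_file)
```

`delete=False` matters on Windows, where a `NamedTemporaryFile` cannot be reopened by name while still open; the explicit `os.remove` in the `finally` block restores the cleanup guarantee.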
```python
from unittest.mock import patch, MagicMock

# Add the src directory to the path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../../src"))
```
Fix incorrect path calculation for src directory.
Same issue as in the e2e test file - the path calculation is incorrect.
```diff
-sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../../src"))
+sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../../../src"))
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In src/praisonai/tests/integration/crewai/test_crewai_basic.py at line 14, the
sys.path insertion uses an incorrect relative path to the src directory. Correct
the path calculation by adjusting the relative path to properly point to the src
directory from the current file location, ensuring the import path is accurate
for the test environment.
…kflows

- Upgraded PraisonAI version from 2.2.16 to 2.2.17 across multiple Dockerfiles and configuration files for improved functionality.
- Enhanced the testing of the PraisonAIModel by explicitly passing the API key and base URL, improving integration with CrewAI.
- Added error handling during model instantiation to provide clearer feedback on failures, enhancing debugging capabilities.

These changes aim to ensure better performance and reliability in the workflow and model interactions.
Summary by CodeRabbit

- `.gitignore` updated to explicitly include certain test directories previously ignored.